36 - Beyond the Patterns - Gitta Kutyniok – Deep Neural Networks: The Mystery of Generalization [ID:35725]

Welcome everybody to another episode of Beyond the Patterns.

Today I have the great pleasure to welcome Gitta Kutyniok.

She was educated in Detmold and in 1996 earned a diploma in mathematics and computer science

at Paderborn University.

She completed her doctorate at Paderborn in 2000.

Her dissertation, Time-Frequency Analysis on Locally Compact Groups, was supervised

by Eberhard Kaniuth.

From 2000 to 2008, she held short-term positions at Paderborn University, the Georgia

Institute of Technology, the University of Gießen, Washington University in St. Louis,

Princeton University, Stanford University and Yale University.

In 2006 she earned her habilitation in Gießen and in 2008 she became a full professor at

Osnabrück University.

In 2011 she was given the Einstein Chair at the Technical University of Berlin.

In 2018 she added courtesy affiliations with Computer Science and Electrical Engineering

at TU Berlin and an adjunct faculty position at the University of Tromsø.

In 2020 she moved to the Ludwig Maximilian University of Munich where she holds a Bavarian

AI Chair.

So it's really a great pleasure to have you here, a very well-known and renowned researcher,

and today her presentation is entitled Deep Neural Networks: The Mystery of Generalization.

It's really great to see more mathematical theory going into the field of deep learning,

where we see all these fancy applications coming up, but it's also great to see that

people are working on solid theory as well.

So Gitta, it's really great to have you here, and I'm very much looking forward to your

presentation. The stage is yours.

Thank you very much, Andreas, for the very nice introduction, and I would also like to

thank you for the invitation.

It's certainly a great pleasure and honor for me to give a talk in this seminar series.

I think we all know how tremendously successful deep neural networks are, but we also know

that there is still a tremendous lack of theoretical understanding. Today we will focus

on one particular aspect of that, namely generalization, and we will shed a bit of light

on how we can actually understand it a bit better from a theoretical viewpoint.
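
To make the notion of generalization concrete before going further: here is a minimal sketch, assuming Python with scikit-learn, of measuring the gap between a network's accuracy on seen (training) data and on unseen (test) data. The dataset and network sizes are purely illustrative, not from the talk.

```python
# Minimal sketch of the "generalization gap" (illustrative; not from the talk).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic classification data standing in for "seen" and "unseen" samples.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# A small, deliberately over-parameterized network.
net = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=500, random_state=0)
net.fit(X_train, y_train)

train_acc = net.score(X_train, y_train)  # accuracy on seen data
test_acc = net.score(X_test, y_test)     # accuracy on unseen data
print(f"train accuracy:     {train_acc:.3f}")
print(f"test accuracy:      {test_acc:.3f}")
print(f"generalization gap: {train_acc - test_acc:.3f}")
```

Why over-parameterized networks like this one often show a small gap in practice, despite having enough capacity to memorize the training set, is exactly the mystery the talk addresses.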

Now, as I said, if we look at applications of deep learning, we see that they are all

around us, and their use will increase significantly in the future: think for instance of

self-driving cars, telecommunications, or speech recognition; most of you have a cell

phone, so you already use these technologies.

In the US, legal matters are already handled by these kinds of approaches; for instance,

job applications are often pre-screened by neural networks,

and the question is how this will evolve in Europe.

And then there is the whole healthcare sector, which has unfortunately become even more

important these days than it already was; there, too, these methods are starting to be

used for diagnosis and for reaching critical decisions.

And if we get a bit closer to science, we see that there, too, we have witnessed

the spectacular success of these approaches. For instance, just a couple of months

ago you could read an article about a new deep-learning-based algorithm called AlphaFold2;

the headline said "it will change everything" and that it "makes a gigantic leap

in solving protein structures", and I think the accompanying graphics show that it is

indeed a significant improvement over what could be achieved before.

And coming a bit closer to, let's say, my own area: I come from mathematics and

work at the intersection of mathematics and computer science, and there too, if you look

at the area of inverse problems, in particular the imaging sciences, questions like denoising …

Part of a video series: Beyond the Patterns

Accessible via: Open Access

Duration: 01:15:06 min

Recording date: 2021-07-13

Uploaded on: 2021-07-13 18:56:08

Language: en-US

We are very proud to welcome Gitta Kutyniok from LMU Munich to our lab!

Abstract: One reason, maybe the main one, for the impressive success of deep neural networks in both public life and science is their amazing generalization ability, namely their performance on unseen data. However, this phenomenon is still to a large extent a mystery.

In this talk, we will provide an introduction to this problem and discuss some recent advances. We will then focus on graph convolutional neural networks and show how to unravel part of the mystery in this situation completely.

Short Bio: Kutyniok was educated in Detmold, and in 1996 earned a diploma in mathematics and computer science at Paderborn University. She completed her doctorate (Dr. rer. nat.) at Paderborn in 2000. Her dissertation, Time-Frequency Analysis on Locally Compact Groups, was supervised by Eberhard Kaniuth.

From 2000 to 2008 she held short-term positions at Paderborn University, the Georgia Institute of Technology, the University of Giessen, Washington University in St. Louis, Princeton University, Stanford University, and Yale University. In 2006 she earned her habilitation in Giessen, in 2008 she became a full professor at Osnabrück University, and in 2011 she was given the Einstein Chair at the Technical University of Berlin. In 2018 she added courtesy affiliations with computer science and electrical engineering at TU Berlin and an adjunct faculty position at the University of Tromsø. In October 2020 she moved to the Ludwig Maximilian University of Munich, where she holds a Bavarian AI Chair.

This video is released under CC BY 4.0. Please feel free to share and reuse.

For reminders to watch new videos, follow us on Twitter or LinkedIn. Also, join our Facebook and LinkedIn groups for information about talks, videos, and job offers.

Music Reference: 
Damiano Baldoni - Thinking of You (Intro)
Damiano Baldoni - Poenia (Outro)

Tags

beyond the patterns